
    Identity, non-identity, and near-identity: Addressing the complexity of coreference

    This article examines the mainstream categorical definition of coreference as "identity of reference." It argues that coreference is best handled when identity is treated as a continuum, ranging from full identity to non-identity, with room for near-identity relations to explain currently problematic cases. This middle ground is needed to account for those linguistic expressions in real text that stand in relations that are neither full coreference nor non-coreference, a situation that has led to contradictory treatment of cases in previous coreference annotation efforts. We discuss key issues for coreference such as conceptual categorization, individuation, criteria of identity, and the discourse model construct. We redefine coreference as a scalar relation between two (or more) linguistic expressions that refer to discourse entities considered to be at the same granularity level relevant to the linguistic and pragmatic context. We view coreference relations in terms of mental space theory and discuss a large number of real-life examples that show near-identity at different degrees.

    A classification of Spanish psychological verbs

    This paper is set within the context of the research currently being carried out in Computational Lexicography at the University of Barcelona Linguistics Department, in collaboration with the University of Maryland Computer Science Department, under the provisional name PIRAPIDES. The research deals with the study of verbal diathesis, subcategorization frames, theta-grids, and the definition of a typology of theta-roles suitable for describing argument structure.

    Comparing distributional semantic models for identifying groups of semantically related words

    Distributional Semantic Models (DSMs) are growing in popularity in Computational Linguistics. DSMs use corpora of language use to automatically induce formal representations of word meaning. This article focuses on one of the applications of DSMs: identifying groups of semantically related words. We compare two models for obtaining formal representations: a well-known approach (CLUTO) and a more recently introduced one (Word2Vec). We compare the two models with respect to the PoS coherence and the semantic relatedness of the words within the obtained groups. We also propose a way to improve the results obtained by Word2Vec through corpus preprocessing. The results show that: a) CLUTO outperforms Word2Vec in both criteria for corpora of medium size; b) the preprocessing largely improves the results for Word2Vec with respect to both criteria.
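    The core idea behind the distributional models compared above can be sketched in a few lines: words are represented by their co-occurrence counts, and words sharing contexts end up close in the resulting vector space. The toy corpus, window size, and similarity function below are illustrative assumptions, not the paper's actual setup.

    ```python
    import numpy as np

    # Toy corpus; a real DSM would be induced from a large corpus.
    corpus = [
        "the cat chased the mouse",
        "the dog chased the cat",
        "the dog bit the mouse",
        "stocks rose on the market",
        "the market fell as stocks dropped",
    ]

    # Build word-by-word co-occurrence vectors within a +/-2 word window.
    vocab = sorted({w for s in corpus for w in s.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for s in corpus:
        ws = s.split()
        for i, w in enumerate(ws):
            for c in ws[max(0, i - 2):i + 3]:
                if c != w:
                    M[idx[w], idx[c]] += 1

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Words with shared contexts ("cat"/"dog") are more similar than
    # words from unrelated domains ("cat"/"stocks").
    print(cos(M[idx["cat"]], M[idx["dog"]]) > cos(M[idx["cat"]], M[idx["stocks"]]))
    ```

    Grouping semantically related words then amounts to clustering these vectors, which is where CLUTO and Word2Vec-based pipelines diverge in practice.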

    WRPA: A system for relational paraphrase acquisition from Wikipedia

    In this paper we present WRPA, a system for Relational Paraphrase Acquisition from Wikipedia. WRPA extracts paraphrasing patterns that hold a particular relation between two entities, taking advantage of Wikipedia's structure. What is new in this system is that the exploitation of Wikipedia goes beyond infoboxes, reaching itemized information embedded in Wikipedia pages. WRPA is language independent, assuming only that a Wikipedia edition and shallow linguistic tools exist for the language in question, and it is also independent of the relation addressed.
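    The pattern-extraction step described above can be illustrated with a minimal sketch: given seed entity pairs known to hold a relation, find sentences mentioning both entities and replace the entities with slots to obtain reusable paraphrase patterns. The seed pairs, sentences, and slot notation here are invented for illustration; WRPA's actual pipeline draws pairs from infoboxes and itemized lists.

    ```python
    # Hypothetical seed entity pairs for an "author-of" relation.
    seed_pairs = [("Cervantes", "Don Quixote"), ("Orwell", "1984")]

    sentences = [
        "Cervantes wrote the novel Don Quixote in two parts.",
        "1984 was written by Orwell near the end of his life.",
        "Orwell lived in Paris.",
    ]

    patterns = set()
    for e1, e2 in seed_pairs:
        for s in sentences:
            if e1 in s and e2 in s:
                # Replace the entities with slots to obtain a reusable pattern.
                patterns.add(s.replace(e1, "<X>").replace(e2, "<Y>"))

    for p in sorted(patterns):
        print(p)
    ```

    The two surviving patterns ("<X> wrote the novel <Y> …" and "<Y> was written by <X> …") are paraphrases of each other with respect to the relation, which is exactly the kind of variation the system is designed to acquire.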

    Paraphrase concept and typology. A linguistically based and computationally oriented approach

    In this paper, we present a critical analysis of the state of the art in the definition and typologies of paraphrasing. This analysis shows that there exists no characterization of paraphrasing that is at the same time comprehensive, linguistically based, and computationally tractable. The paper then sets out to define and delimit the concept of paraphrasing on the basis of propositional content. We present a general, inclusive, and computationally oriented typology of the linguistic mechanisms that give rise to form variations between paraphrase pairs.

    Text as Scene: Discourse Deixis and Bridging Relations

    This paper presents a new framework, "text as scene", which lays the foundations for the annotation of two coreferential links: discourse deixis and bridging relations. The incorporation of what we call textual and contextual scenes provides more flexible annotation guidelines, in which broad type categories are clearly differentiated. Such a framework, capable of dealing with discourse deixis and bridging relations from a common perspective, aims at improving the poor reliability scores obtained by previous annotation schemes, which fail to capture the vague references inherent in both these links. The guidelines presented here complete the annotation scheme designed to enrich the Spanish CESS-ECE corpus with coreference information, thus building the CESS-Ancora corpus.

    Intensive use of lexicon and Corpus for WSD

    The paper addresses the issue of how to use linguistic information in Word Sense Disambiguation (WSD). We introduce a knowledge-driven and unsupervised WSD method that requires only a large corpus previously tagged with POS and very little grammatical knowledge. The WSD process is performed by taking into account the syntactic patterns in which the ambiguous occurrence appears, relying on the hypothesis of "almost one sense per syntactic pattern". This integration allows us to obtain, from corpora, paradigmatic and syntagmatic information related to the ambiguous occurrence. We also use variants of EuroWordNet (EWN) information for word senses and different WSD algorithms. We report the results obtained when applying the method to the Spanish lexical sample task in Senseval-2. This methodology is easily transferable to other languages.
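    The "almost one sense per syntactic pattern" hypothesis can be made concrete with a small sketch: if the majority sense observed for a (lemma, pattern) pair strongly dominates, that sense can be assigned to new occurrences in the same pattern. The lemma, pattern labels, and sense names below are hypothetical placeholders, not the paper's actual data.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical sense-tagged observations: (lemma, pattern) -> sense.
    observations = [
        ("bank", "V_on_NP", "river_bank"),
        ("bank", "V_on_NP", "river_bank"),
        ("bank", "NP_of_money", "finance_bank"),
        ("bank", "NP_of_money", "finance_bank"),
        ("bank", "NP_of_money", "river_bank"),  # rare exception
    ]

    counts = defaultdict(Counter)
    for lemma, pattern, sense in observations:
        counts[(lemma, pattern)][sense] += 1

    def disambiguate(lemma, pattern):
        # Assign the majority sense of the syntactic pattern, if any is known.
        c = counts.get((lemma, pattern))
        return c.most_common(1)[0][0] if c else None

    print(disambiguate("bank", "NP_of_money"))
    ```

    In the unsupervised setting the sense counts would come not from manual tags but from corpus evidence combined with EuroWordNet sense information, as the abstract describes.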

    A grammar for parsing the VOX dictionary

    This paper describes part of the work carried out within the framework of the Esprit Acquilex project, focusing specifically on the grammar developed for parsing the VOX dictionary. The project is a research programme for the development of techniques and methodologies to extract the information contained in published dictionaries available in machine-readable form (MRDs), with the aim of building the lexical component of natural language processing systems.

    Polarity analysis of reviews based on the omission of asymmetric sentences

    In this paper, we present a novel approach to polarity analysis of product reviews which detects and removes sentences with the opposite polarity to that of the entire document (asymmetric sentences) as a preliminary step towards identifying positive and negative reviews. We postulate that asymmetric sentences are morpho-syntactically more complex than symmetric ones (sentences with the same polarity as that of the entire document) and that it is possible to improve the detection of the polarity orientation of reviews by removing asymmetric sentences from the text. To validate this hypothesis, we measured the syntactic complexity of both types of sentences in a multi-domain corpus of product reviews and contrasted three relevant data configurations based on the inclusion and omission of asymmetric sentences from the reviews.
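    The asymmetric-sentence filtering step can be sketched as follows: score each sentence with a polarity lexicon, take the document-level polarity as the sum, and drop sentences whose score opposes it before classifying. The tiny lexicon and scoring scheme are illustrative assumptions; the paper's own detection relies on morpho-syntactic complexity rather than a bare lexicon.

    ```python
    # Tiny hand-made polarity lexicon (hypothetical values, for illustration).
    LEXICON = {"great": 1, "love": 1, "excellent": 1,
               "poor": -1, "broke": -1, "disappointing": -1}

    def sentence_polarity(sentence):
        # Sum the lexicon scores of the words in the sentence.
        return sum(LEXICON.get(w.strip(".,!").lower(), 0) for w in sentence.split())

    def review_polarity(sentences, drop_asymmetric=True):
        scores = [sentence_polarity(s) for s in sentences]
        doc = sum(scores)
        if drop_asymmetric:
            # Remove sentences whose polarity opposes the overall document polarity.
            scores = [sc for sc in scores if sc * doc >= 0]
        return "positive" if sum(scores) >= 0 else "negative"

    review = ["I love this camera, the lens is excellent.",
              "The strap broke, which was disappointing.",
              "Overall a great buy."]
    print(review_polarity(review))
    ```

    In this example the second sentence (score -2) opposes the overall positive document score and is discarded, sharpening the final decision.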

    Information theory-based compositional distributional semantics

    In the context of text representation, Compositional Distributional Semantics models aim to fuse the Distributional Hypothesis and the Principle of Compositionality. Text embedding is based on co-occurrence distributions, and the representations are in turn combined by compositional functions taking into account the text structure. However, the theoretical basis of compositional functions is still an open issue. In this article we define and study the notion of Information Theory-based Compositional Distributional Semantics (ICDS): (i) we first establish formal properties for embedding, composition, and similarity functions based on Shannon's Information Theory; (ii) we analyze the existing approaches under this prism, checking whether or not they comply with the established desirable properties; (iii) we propose two parameterizable composition and similarity functions that generalize traditional approaches while fulfilling the formal properties; and finally (iv) we perform an empirical study on several textual similarity datasets that include sentences with high and low lexical overlap, and on the similarity between words and their descriptions. Our theoretical analysis and empirical results show that fulfilling formal properties positively affects the accuracy of text representation models in terms of correspondence (isometry) between the embedding and meaning spaces.
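    The notion of a parameterizable composition function can be illustrated with the simplest instance: a weighted additive combination of word vectors, compared under cosine similarity. The embeddings and the weighting parameter below are made-up placeholders; the paper's proposed functions are grounded in Shannon's Information Theory rather than plain averaging.

    ```python
    import numpy as np

    # Hypothetical word embeddings (a real model would induce these
    # from co-occurrence distributions).
    emb = {
        "big":   np.array([0.9, 0.1, 0.0]),
        "large": np.array([0.8, 0.2, 0.0]),
        "dog":   np.array([0.1, 0.9, 0.3]),
    }

    def compose(words, alpha=0.5):
        # Parameterizable additive composition: a weighted running average.
        out = emb[words[0]]
        for w in words[1:]:
            out = alpha * out + (1 - alpha) * emb[w]
        return out

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Phrases built from near-synonymous modifiers stay close in the space.
    print(cos(compose(["big", "dog"]), compose(["large", "dog"])))
    ```

    The formal properties studied in the article constrain how such composition and similarity functions may be defined so that distances in the embedding space track distances in meaning.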